1. Where Word and World Meet: Intuitive Correspondence Between Visual and Linguistic Symmetry
   In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 43, iss. 43 (2021)
   [BASE]

2. Now You Hear Me, Later You Don’t: The Immediacy of Linguistic Computation and the Representation of Speech ...
   [BASE]

4. sj-pdf-1-pss-10.1177_0956797620968787 – Supplemental material for Now You Hear Me, Later You Don’t: The Immediacy of Linguistic Computation and the Representation of Speech ...
   [BASE]

6. Where Word and World Meet: Intuitive Correspondence Between Visual and Linguistic Symmetry ...
   [BASE]

8. Now you hear me, later you don’t: The Immediacy of Linguistic Computation and the Representation of Speech ...
   Abstract: (Accepted and In Press at Psychological Science) What happens to the acoustic signal after it enters the mind of a listener? Previous work demonstrates that listeners maintain intermediate representations over time. However, the internal structure of such representations—be they the acoustic-phonetic signal or more general information about the probability of possible categories—remains underspecified. We present two experiments using a novel speaker adaptation paradigm aimed at uncovering the format of speech representations. We exposed adult listeners (N=297) to a speaker whose utterances contained acoustically ambiguous information concerning phones/words and manipulated the temporal availability of disambiguating cues via visually presented text (i.e., presentation before or after each utterance). Results from a traditional phoneme categorization task showed that listeners adapt to a modified acoustic distribution when disambiguating text is provided before the audio, but not after. Results support the ...
   URL: https://osf.io/wg6de/
   DOI: https://dx.doi.org/10.17605/osf.io/wg6de
   [BASE]

9. Event Structure In Vision And Language
   In: Publicly Accessible Penn Dissertations (2019)
   [BASE]

10. Event Structure in Vision and Language
    In: Dissertations available from ProQuest (2019)
    [BASE]

11. Propose but verify: Fast mapping meets cross-situational word learning
    [BASE]